
Perpetually intensifying AI bot attack

26 August 2025  

Our metaphors fail to describe how AI works or what damage it does. But Gordon Johnstone, a speaker at the Green Libraries Conference, says aggressive marketing and zero transparency are the immediate problems.

Scotland’s AI strategy was published back in 2021 to put Scotland at the forefront of trustworthy, ethical, and inclusive artificial intelligence. The goal was also to make Scotland a more prosperous, greener and more outward-looking country.

“Obviously the world has changed a lot since 2021,” says Gordon Johnstone, communications and events manager at the Scottish AI Alliance – the delivery body for Scotland’s AI strategy.

“ChatGPT was released by OpenAI in 2022, which didn’t change the strategy – because the strategy is tool-agnostic – but it changed our approach. It’s increasingly important for people to understand the role AI plays in their day-to-day lives and how it impacts them.”

Work done by the Scottish AI Alliance includes the development of the course Living with AI, a non-technical beginner’s course introducing you to the social and ethical elements of AI and how it impacts you day-to-day.

Another tool is the Scottish AI Playbook, created to help businesses. “We recently refreshed it,” Gordon says, “and it may be of interest to your members – like the AI Jargon Buster, which goes through all the common terms that you’re going to encounter when you’re dealing with AI.”

Bot attack

Back in March CILIP members and suppliers noticed such a surge in AI bots in their library management systems – one report (by CILIP Supplier Partner Open Fifth) putting the ratio of legitimate queries to AI bot queries at 19:75,000 in its LMS – that they thought libraries were being specifically targeted. It has left systems ‘running hot’, using more power, and if it persists could require more and bigger servers.

This, alongside the over-abundance of publisher AI search tools, copyright and transparency concerns, ethics and the environmental concerns – and many other day-to-day challenges posed by AI, leaves some wondering when the positives of AI will outweigh the negatives.

“AI could have an amazing positive impact on the world, but that’s not inevitable,” Gordon says, “An AI future is unavoidable, but a positive AI future is not inevitable.”

He said: “The more you understand AI, the more sceptical you become because it’s obvious that so much of the discussion about AI is marketing talk. It’s hype.” And then, under the hype there are huge risks and disruptions – from the death of a free public internet to the erosion of nation states.

However, “AI hasn’t really invented any new problems, it’s exacerbating and speeding up existing ones. Bots crawling websites for content and information have been an issue since the dawn of the internet, but now it’s happening on an unfathomable scale. And every time you find a specific solution, some clever person finds a way around it. But it’s a problem that every industry is facing right now.”

Losing ground

He doubts that traditional power structures will fix this. “In Scotland are we really influencing the decision making of Microsoft or OpenAI? We’ve worked with both companies. All lovely people. But are they then going back to Sam Altman or Bill Gates and saying, ‘Hey guys, we should really be thinking about the ethics behind what we’re doing’? As much as I would love for that to be the case, that doesn’t seem likely. This is obviously getting very speculative, but we could be entering a point where the nation state will be eroded. The biggest decisions will be reserved for the corporations that control access to data, or access to resources.”

Currently the Scottish AI strategy advocates for any conversations around AI to be viewed through a lens of ethics and human rights. “There is a strong push in Scotland to make AI used in the public sector more transparent. We have the AI register, which is another government programme that logs all the AI tools being used in the Scottish public sector, as well as those still in development. It details what they are and what they’re used for. A good evolution of that would be a rating system, a red/amber/green approach to how transparent and how safe these tools are to use.

“The public sector is getting particular attention from the Scottish government in terms of AI use. And Scotland is taking a kind of human-rights-first approach to AI through the ‘trustworthy, ethical and inclusive’ strategy. England is a bit more innovation-focussed, regulation-lite, to promote growth. It’s as if it wanted to go very, very fast, very quickly, to keep up with the likes of the US and China, and the Scottish government was taking a slightly slower approach, trying to put in guardrails and frameworks that people could use.”

But he says comments from the Scottish Government suggest that this competitive approach is winning over the Scottish decision-makers. “It could be argued the Scottish government is trying to align itself a little bit more with the UK government. Richard Lochhead, minister responsible for AI, recently said at one of our events that we should stop worrying so much about AI and start to embrace its possibilities and its potential. That’s maybe an indication of what’s going to come post March 2026.”

Anthropomorphisation

Alongside these macro political gains made by AI, it also has traction at a person-to-person level. “The anthropomorphising of AI has been fascinating to watch – fuelled entirely by the people selling AI tools. If you ask ChatGPT to write you an academic essay it might include references to books and authors that don’t exist. They call that hallucination – a deeply human experience. In effect the marketing has turned AI into a seemingly intelligent being that has made an understandable mistake. But it’s not that at all. It’s simply a flawed machine. I think that we’re going to see a lot more of this anthropomorphisation of language around AI as well-known tools become more integrated.”

“Training is also an inadequate word,” Gordon says, “There will be a constant thirst for information and data to make sure that AI models are as up-to-date and cutting edge and knowledgeable as possible. You can’t do that just by training your data once and letting it go, especially in a world where we’re creating such an unfathomable amount of data every single day. It’s a problem that will intensify rather than dissipate. That’s how capitalism works. People don’t stop earning money when they have enough. It’s the same with AI models. If there is new data to be trained on, they are going to want it.”

In the end he said, “This might lead to the siloing of data, keeping things off grid, so it can’t be scraped or stolen. It protects your IP but goes against what the internet was meant to be. It would be sad to go back to a world where information becomes gated again. But smaller, medium language models might become more prevalent in the future where people create these models based entirely on their own verifiable data in a closed system. I can imagine that being useful for libraries – it could make the data libraries own very powerful.”

Transparency

He is similarly sceptical about the lack of transparency from big tech, especially around its environmental impact: “When it comes to the biggest players in the world, like OpenAI, Microsoft or Apple, they are usually very cagey about the environmental impact of what they do because it’s pretty bad PR.”

He said they provide details on their environmental offsetting strategies but less about what it is they’re offsetting: “They talk a lot about mitigation without really going into the damage AI is causing to the environment.”

He said that Microsoft’s carbon emissions had increased by 30 per cent since 2020 and Google’s by 48 per cent since 2019, and that Goldman Sachs believes data centres will use eight per cent of total US power by 2030, up from three per cent in 2023.

And while Microsoft is one of the biggest purchasers of carbon credits – 83 million tonnes’ worth – he pointed out there were concerns about whether carbon offsetting works practically and ethically. However, his main concern was that the other side of the equation – how the type and scale of environmental damage being done is being identified and measured – was missing. IP approached Microsoft for comment but has not received a response.

“If we don’t know the harm AI is doing we can’t possibly know if these companies’ efforts are meaningful or simply performative,” he said. “What I would like to see is the full supply chain environmental impact of AI – from data centre construction, data scraping and training and staffing.”

He said this also needed to take into account the impact of AI use by end-users – and the by-products of marketing efforts by AI organisations, like the recreational use of AI, which he thinks wouldn’t happen if it wasn’t marketed so heavily, or if its impact was understood.

There is also the question of whether the carbon footprint of AI should include the mitigation forced on other organisations – like libraries – when they have to defend their systems and stop them being swamped with AI bots.

Metaphorical communication

One challenge is maintaining public attention. “The problem is that for a long time, we’ve spoken about environmental impact in terms of metaphors. We’ve likened it to X number of homes, or cars, or football fields of garbage. But I don’t understand the significance of a tonne of carbon, or what it means to make a car. Unless I’m well versed in the environmental impact and the atmospheric impact of carbon, I won’t know the difference between a tonne of carbon or a million tonnes of carbon.”

“We do talk about environmental impact in the strategy and it’s a big part of our literacy efforts,” he said, adding: “We just finished a project with the Glasgow School of Art about how to visualise the environmental impact of AI.” But having a real feel for the impact of GPT-3 is important because GPT-4 is maybe 10,000 times more complex than GPT-3 – making it potentially many thousands of times more damaging to the environment.

Gordon doubts human brains deal well with big one-off shocks like this, and thinks realistic options might be something like a real-time carbon footprint gauge: “A while ago there was a tool that tallied up your carbon usage as you spoke to ChatGPT. I think it was very preliminary research, but it showed that if people could see the carbon impact they were having, they were less likely to use it for extended periods of time. So that could be a way of mitigating all this. If organisations like OpenAI and Microsoft are obligated to show you the carbon output of your AI usage, that might help people make smarter choices in terms of when to use it and when it’s not necessary.”
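The gauge Gordon describes can be sketched in a few lines. This is a purely illustrative model: the energy-per-token and grid-intensity figures below are made-up assumptions, not published values – the absence of real numbers being exactly the transparency gap he is pointing at.

```python
# Hypothetical per-session "carbon gauge" of the kind described above.
# Both constants are illustrative assumptions, not measured values.
WH_PER_TOKEN = 0.003          # assumed energy per generated token (Wh)
GRID_G_CO2_PER_KWH = 250.0    # assumed grid carbon intensity (gCO2/kWh)

def query_footprint_g(tokens_generated: int) -> float:
    """Estimate grams of CO2 for one chat response."""
    kwh = tokens_generated * WH_PER_TOKEN / 1000.0
    return kwh * GRID_G_CO2_PER_KWH

class CarbonGauge:
    """Running tally shown to the user as they chat."""
    def __init__(self) -> None:
        self.total_g = 0.0

    def record(self, tokens_generated: int) -> float:
        """Add one response to the tally and return the session total."""
        self.total_g += query_footprint_g(tokens_generated)
        return self.total_g

gauge = CarbonGauge()
gauge.record(500)    # one medium-length answer
gauge.record(1200)   # one long answer
print(f"session so far: {gauge.total_g:.2f} g CO2")
```

If providers published real per-query figures, the two constants could be replaced and the same running total shown alongside each response.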

The information vacuum plays into what he sees as the goal of the AI owners, “which is to make it a ubiquitous part of life, so you stop questioning the decisions that are being made around it. They want you to get to a point where organisations say they’re implementing AI and it feels completely natural. You’ll say ‘Well, of course you are’; you won’t ask ‘Why do you need to?’”

Libraries targeted?

The anthropomorphic language around AI might explain why librarians feel targeted, but Gordon says: “It’s not something to take personally. Information-rich repositories will simply have more requests because there is more data to train on. But while libraries aren’t being specifically targeted, it doesn’t mean they aren’t specifically disadvantaged.

“For private companies, yes, their data is very valuable, but they exist to extract profit. For libraries a massive part of the reason they exist is the data and the knowledge they hold and the ease with which it can be shared. So libraries might find themselves at a disadvantage if all the data, knowledge, and academic research is taken out of their control and put into AI models.”

“When it comes to the risks that data scraping poses, I don’t think you’re ever going to be able to remove it entirely, but I think it’ll be a case of mitigation as opposed to a complete defensive plan. There may be systems out there that are completely safe, but I don’t imagine libraries have that kind of budget. It will be a case of trying to maintain best practice, but I don’t specifically know what that’s going to be.”
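The mitigation-over-elimination stance can be made concrete. Below is a minimal token-bucket rate limiter, one common defence against bot traffic swamping a catalogue; the per-IP rate and burst values are illustrative assumptions, and a real deployment would typically enforce this at the proxy or CDN layer rather than in a library system's own code.

```python
import time

class TokenBucket:
    """Allow short bursts but cap the sustained request rate per client."""
    def __init__(self, rate_per_sec: float, burst: int) -> None:
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed since the last request.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: serve HTTP 429 or a challenge instead

# One bucket per client IP: bursts pass, sustained crawls get throttled.
buckets: dict[str, TokenBucket] = {}

def check(ip: str) -> bool:
    bucket = buckets.setdefault(ip, TokenBucket(rate_per_sec=2.0, burst=10))
    return bucket.allow()
```

A scraper that ignores robots.txt still hits the limiter, which is why this is mitigation rather than a complete defence: well-behaved users are barely affected, while an aggressive crawler is slowed to the configured rate instead of being eliminated.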

Library role

“Artificial intelligence isn’t new, but the scale of the impact it’s having on the world is. While its risks are well documented and often discussed, the various technologies that make up the AI ecosystem could have huge benefits to the library and information sectors so long as they are implemented and procured responsibly and ethically. It is incumbent on all of us in the sector to be AI-literate to understand the provenance of the tools we’re using and ensure they adhere to the privacy and safety guidelines of our organisations.

“AI is something to be understood rather than scared of. Library and information professionals have always been at the forefront of harnessing technological advancements and this new era we’re entering is likely to be the same. In many ways it simply represents the continuation of the move towards the automation of mundane tasks which previously required human intervention. When used responsibly it can save you time, money, and effort in many aspects of your job, but it’s imperative to remember the core tenet of how AI works – if you put rubbish in, you’ll get rubbish out.”


Published: 7 August 2025


This reporting is funded by CILIP members. Find out more about the benefits of CILIP membership.